Factor-Adjusted Regularized Model Selection
Authors
Abstract
Related papers
Model selection via standard error adjusted adaptive lasso
The adaptive lasso is a model selection method shown to be both consistent in variable selection and asymptotically normal in coefficient estimation. The actual variable selection performance of the adaptive lasso depends on the weight used. It turns out that the weight assignment using the OLS estimate (OLS-adaptive lasso) can result in very poor performance when collinearity of the model matrix...
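The OLS-weighted adaptive lasso described above can be sketched with off-the-shelf tools via the standard column-rescaling reparameterization. The simulated data, the choice γ = 1 for the weight exponent, and the penalty level `alpha=0.1` are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta_true = np.array([3.0, -2.0, 1.5] + [0.0] * (p - 3))
y = X @ beta_true + rng.standard_normal(n)

# Step 1: OLS pilot fit -- the weight choice the abstract warns about
# when the columns of X are collinear.
ols = LinearRegression().fit(X, y)
w = 1.0 / (np.abs(ols.coef_) + 1e-8)  # gamma = 1; epsilon guards zeros

# Step 2: adaptive lasso via reparameterization -- rescale each column
# by 1/w_j, run a plain lasso, then scale the coefficients back.
X_scaled = X / w
lasso = Lasso(alpha=0.1).fit(X_scaled, y)
beta_adaptive = lasso.coef_ / w

print(np.round(beta_adaptive, 2))
```

With well-conditioned columns, the large weights on the noise variables drive their coefficients to exactly zero while the signal coefficients are only mildly shrunk; with strong collinearity the OLS pilot becomes unstable and the weights unreliable, which is the failure mode the abstract highlights.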
Efficient Model Selection for Regularized Classification by Exploiting Unlabeled Data
Hyper-parameter tuning is a resource-intensive task when optimizing classification models. The commonly used k-fold cross-validation can become intractable in large-scale settings when a classifier has to learn billions of parameters. At the same time, in the real world one often encounters multi-class classification scenarios with only a few labeled examples; model selection approaches often offer...
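The k-fold cross-validation baseline that the abstract argues against at very large scale looks like the following minimal sketch; the synthetic dataset, the logistic-regression model, and the grid over `C` are illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Small synthetic multi-class-style problem standing in for real data.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# 5-fold grid search over the regularization strength C: every candidate
# is refit k times, which is what becomes intractable when a single fit
# involves billions of parameters.
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    cv=5,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

The cost is (number of candidates) × (number of folds) full training runs, which motivates the cheaper, unlabeled-data-based selection criteria the paper proposes.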
On model selection consistency of regularized M-estimators
Penalized M-estimators are used in many areas of science and engineering to fit models with some low-dimensional structure in high-dimensional settings. In many problems arising in machine learning, signal processing, and high-dimensional statistics, the penalties are geometrically decomposable, i.e. can be expressed as a sum of support functions. We generalize the notion of irrepresentability ...
Model Selection for Regularized Least-Squares Algorithm in Learning Theory
We investigate the problem of model selection for learning algorithms depending on a continuous parameter. We propose a model selection procedure based on a worst case analysis and data-independent choice of the parameter. For regularized least-squares algorithm we bound the generalization error of the solution by a quantity depending on few known constants and we show that the corresponding mo...
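For contrast with the worst-case, data-independent choice the abstract proposes, a data-dependent sweep of the continuous regularization parameter for regularized least squares (ridge regression) can be sketched as below; the data, the grid of λ values, and the single train/test split are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.standard_normal((150, 30))
beta = rng.standard_normal(30)
y = X @ beta + rng.standard_normal(150)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Sweep the continuous regularization parameter and record held-out
# squared error; the paper's procedure instead bounds the generalization
# error a priori, from known constants, without a data split.
errors = {}
for lam in [1e-3, 1e-2, 1e-1, 1.0, 10.0]:
    model = Ridge(alpha=lam).fit(X_tr, y_tr)
    errors[lam] = float(np.mean((model.predict(X_te) - y_te) ** 2))

best_lam = min(errors, key=errors.get)
print(best_lam, round(errors[best_lam], 3))
```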
Risk Bounds and Model Selection for Regularized Least Squares
Motivation: A central problem in learning theory is a quantitative assessment of the inference property of a learning algorithm ensuring consistency. A number of seminal works show that the essential feature of an algorithm should be its capacity to control the complexity of the solution; this is usually realized by introducing a parametric family of learning algorithms in which the parameters...
Journal
Journal title: SSRN Electronic Journal
Year: 2018
ISSN: 1556-5068
DOI: 10.2139/ssrn.3248047